Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples
We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm that materially improves results with no increase in computational cost. Through experiments on many different GAN variants, we show that this 'top-k update' procedure is a generally applicable improvement. To understand the nature of the improvement, we conduct extensive analysis on a simple mixture-of-Gaussians dataset and discover several interesting phenomena. Among these is that, when gradient updates are computed using the worst-scoring batch elements, samples can actually be pushed further away from their nearest mode. We also apply our method to state-of-the-art GAN models including BigGAN and improve the state-of-the-art FID for conditional generation on CIFAR-10 from 9.21 to 8.57.
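The "one line of code" the abstract describes amounts to keeping only the k fake samples the discriminator scores highest when computing the generator loss, and discarding the rest from the gradient. A minimal numpy sketch of that idea, assuming a non-saturating generator loss; the function name and loss form are illustrative, not the authors' code:

```python
import numpy as np

def topk_generator_loss(d_scores, k):
    """Generator loss computed on only the k highest-scoring
    (most realistic-looking) fake samples in the batch; the
    bottom batch_size - k samples are thrown away.
    Illustrative sketch, not the paper's implementation."""
    topk = np.sort(d_scores)[-k:]      # keep the k best D scores
    return -np.mean(np.log(topk))      # non-saturating loss on survivors
```

In a framework with autodiff, the same effect is the single changed line, e.g. selecting the top-k discriminator outputs before reducing to the loss; samples outside the top k then contribute zero gradient to the generator.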
Review for NeurIPS paper: Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples
Additional Feedback (post-rebuttal): I thank the authors for the response. I agree with the other reviewers that the method is simple, effective, and general enough to improve GAN methods. However, I think the rebuttal does not fully address my concerns; some of them remain: 1. Regarding the novelty of the paper, using the D scores as feedback to improve generator quality is not new. What is new in the paper is the simple way the discriminator scores are used. However, the paper still lacks a substantial discussion of, and comparison to, related works that would clarify the advantages of the proposed method over existing ones (e.g., stronger improvements or better training time): [*] Metropolis-Hastings Generative Adversarial Networks [**] Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling. 2. The inconsistency of the u value in the experiments is not addressed in the rebuttal.
Effective Shortcut Technique for GAN
Park, Seung, Yoo, Cheol-Hwan, Shin, Yong-Goo
In recent years, generative adversarial network (GAN)-based image generation techniques design their generators by stacking multiple residual blocks. The residual block generally contains a shortcut, i.e., a skip connection, which effectively supports information propagation through the network. In this paper, we propose a novel shortcut method, called the gated shortcut, which not only retains the strengths of the residual block but also further boosts GAN performance. More specifically, based on a gating mechanism, the proposed method leads the residual block to keep (or remove) information that is relevant (or irrelevant) to the image being generated. To demonstrate that the proposed method brings significant improvements in GAN performance, this paper provides extensive experimental results on standard datasets such as CIFAR-10, CIFAR-100, LSUN, and tiny-ImageNet. Quantitative evaluations show that the gated shortcut achieves impressive GAN performance in terms of Frechet inception distance (FID) and Inception score (IS). For instance, the proposed method improves the FID and IS scores on the tiny-ImageNet dataset from 35.13 to 27.90 and from 20.23 to 23.42, respectively.
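The gating mechanism the abstract describes can be pictured as a residual block whose skip path is modulated by a learned sigmoid gate, so the block can keep or suppress incoming features. A minimal numpy sketch under that assumption; the exact placement of the gate in the paper's architecture may differ, and `residual_fn`/`gate_fn` are hypothetical stand-ins for learned layers:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_shortcut(x, residual_fn, gate_fn):
    """Residual block with a gated skip connection.
    Gate values near 1 pass the incoming feature through;
    values near 0 remove it. Illustrative guess at the
    mechanism, not the paper's exact block."""
    g = sigmoid(gate_fn(x))            # per-feature gate in (0, 1)
    return g * x + residual_fn(x)      # gated skip + residual branch
```

With a plain (ungated) shortcut, `g` would be fixed at 1; learning `g` lets the block decide per feature how much of the input to propagate.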
Use of Neural Signals to Evaluate the Quality of Generative Adversarial Network Performance in Facial Image Generation
Wang, Zhengwei, Healy, Graham, Smeaton, Alan F., Ward, Tomas E.
There is a growing interest in using Generative Adversarial Networks (GANs) to produce image content that is indistinguishable from a real image as judged by a typical person. A number of GAN variants for this purpose have been proposed, however, evaluating GANs is inherently difficult because current methods of measuring the quality of the output do not always mirror what is actually perceived by a human. We propose a novel approach that deploys a brain-computer interface to generate a neural score that closely mirrors the behavioral ground truth measured from participants discerning real from synthetic images. In this paper, we first compare the three most widely used metrics in the literature for evaluating GANs in terms of visual quality compared to human judgments. Second, we propose and demonstrate a novel approach using neural signals and rapid serial visual presentation (RSVP) that directly measures a human perceptual response to facial production quality independent of a behavioral response measurement. Finally we show that our neural score is more consistent with human judgment compared to the conventional metrics we evaluated. We conclude that neural signals have potential application for high quality, rapid evaluation of GANs in the context of visual image synthesis.
Is Generator Conditioning Causally Related to GAN Performance?
Odena, Augustus, Buckman, Jacob, Olsson, Catherine, Brown, Tom B., Olah, Christopher, Raffel, Colin, Goodfellow, Ian
Recent work (Pennington et al., 2017) suggests that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning. Motivated by this, we study the distribution of singular values of the Jacobian of the generator in Generative Adversarial Networks (GANs). We find that this Jacobian generally becomes ill-conditioned at the beginning of training. Moreover, we find that the average (with z from p(z)) conditioning of the generator is highly predictive of two other ad-hoc metrics for measuring the 'quality' of trained GANs: the Inception Score and the Frechet Inception Distance (FID). We test the hypothesis that this relationship is causal by proposing a 'regularization' technique (called Jacobian Clamping) that softly penalizes the condition number of the generator Jacobian. Jacobian Clamping improves the mean Inception Score and the mean FID for GANs trained on several datasets. It also greatly reduces inter-run variance of the aforementioned scores, addressing (at least partially) one of the main criticisms of GANs.
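A soft penalty on the generator Jacobian's conditioning can be approximated without forming the Jacobian: perturb z slightly, measure the local squared gain Q = ||G(z+d) - G(z)||² / ||d||², and penalize Q when it leaves a target band. The sketch below is a finite-difference illustration of that idea under assumed hyperparameter names (`lam_min`, `lam_max`, `eps`); it is not the paper's exact formulation:

```python
import numpy as np

def jacobian_clamping_penalty(G, z, eps=1e-2, lam_min=1.0, lam_max=20.0):
    """Quadratic penalty on the generator's local squared gain
    Q = ||G(z + d) - G(z)||^2 / ||d||^2 whenever Q falls outside
    [lam_min, lam_max]. Finite-difference sketch of the idea;
    hyperparameter names are illustrative."""
    d = eps * np.random.randn(*z.shape)           # small random perturbation
    q = np.sum((G(z + d) - G(z)) ** 2) / np.sum(d ** 2)
    return max(q - lam_max, 0.0) ** 2 + max(lam_min - q, 0.0) ** 2
```

Because the penalty is zero inside the band and grows quadratically outside it, it "softly clamps" the gain rather than forcing a fixed singular value, which matches the abstract's description of softly penalizing the condition number.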